704cddc91e28d1a5517518b2f12bc321-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their feedback. We respond first to shared comments and then to individual ones. Reviewers 2 and 3 requested clarification of the advantages of DCA over other methods. For instance, one could attempt to correlate each neuron's contribution to the DCA subspace with single-neuron properties. Studying the behavior of Kernel DCA is a direction for future work. We also found and corrected a minor bug in Figure 1A: the SFA and DCA lines are now blue and red, respectively.









Efficient Test-Time Scaling for Small Vision-Language Models

Kaya, Mehmet Onurcan, Elliott, Desmond, Papadopoulos, Dim P.

arXiv.org Artificial Intelligence

Small Vision-Language Models (VLMs) provide a computationally efficient alternative to larger models, at the cost of weaker generalization abilities and downstream task performance. These shortcomings could be addressed by test-time scaling techniques, but existing methods are typically computationally demanding, contradicting the resource-efficient design goals of small models. To address these limitations, we propose two novel and efficient test-time scaling strategies that leverage model-internal features rather than external supervision: (i) Test-Time Augmentation (TTAug), which generates multiple augmented inputs and aggregates outputs at the token level without parameter updates, and (ii) Test-Time Adaptation (TTAdapt), which adapts model parameters during inference using consensus-based pseudolabels from TTAug. Through extensive experiments across nine benchmarks, we demonstrate consistent performance improvements while maintaining computational efficiency suitable for resource-constrained environments. The generality of our approach is demonstrated both within models at different scales and across different VLMs without additional tuning.
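The token-level aggregation step of TTAug can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `ttaug_aggregate` and the use of simple logit averaging as the consensus rule are assumptions for this sketch.

```python
import numpy as np

def ttaug_aggregate(per_aug_logits):
    """Aggregate token-level logits from several augmented views.

    per_aug_logits: array of shape (n_augmentations, n_tokens, vocab_size),
    i.e. one set of per-token logits for each augmented input.
    Averaging logits across views is an assumed consensus rule for this
    sketch; other token-level schemes (e.g. majority vote) are possible.
    Returns the argmax token id at each position after averaging.
    """
    mean_logits = np.mean(per_aug_logits, axis=0)  # (n_tokens, vocab_size)
    return np.argmax(mean_logits, axis=-1)         # (n_tokens,)

# Toy example: 3 augmented views, 2 token positions, vocabulary of size 4.
logits = np.array([
    [[2.0, 0.1, 0.0, 0.0], [0.0, 1.5, 0.2, 0.0]],
    [[1.8, 0.2, 0.1, 0.0], [0.1, 1.2, 0.3, 0.0]],
    [[0.5, 0.1, 2.2, 0.0], [0.0, 1.0, 0.1, 0.4]],
])
tokens = ttaug_aggregate(logits)  # consensus token ids per position
```

Because aggregation happens purely over already-computed outputs, no parameter updates are needed, which is what keeps the method cheap at inference time; the resulting consensus tokens are also the kind of pseudolabels TTAdapt could adapt against.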